Should ChatGPT Write Your Breakup Text? Exploring the Role of AI in Relationship Dissolution
Fu, Yue, Chen, Yixin, Lai, Zelia Gomes Da Costa, Hiniker, Alexis
Relationships are essential to our happiness and wellbeing. The dissolution of a relationship, the final stage of a relationship's lifecycle and one of the most stressful events in an individual's life, can have profound and long-lasting impacts on people. With the breakup process increasingly facilitated by computer-mediated communication (CMC), and with AI-mediated communication (AIMC) tools likely to shape it in the future, we conducted a semi-structured interview study with 21 participants. We aim to understand: 1) the current role of technology in the breakup process, 2) the needs individuals have and the support they seek during the process, and 3) how AI might address these needs. Our research shows that people have distinct needs at various stages of ending a relationship. Currently, technology is used for information gathering and community support; it acts as a catalyst for breakups, enables ghosting and blocking, and facilitates communication. Participants anticipate that AI could aid in sense-making about their relationship leading up to the breakup, act as a mediator, assist in crafting appropriate wording, tone, and language during breakup conversations, and support companionship, reflection, recovery, and growth after a breakup. Our findings also demonstrate an overlap between the breakup process and the Transtheoretical Model (TTM) of behavior change. Through the lens of TTM, we explore the potential support and affordances AI could offer in breakups, including its benefits and the precautions necessary regarding AI's role in this sensitive process.
- North America > United States > Washington > King County > Seattle (0.14)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
- Education (0.93)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.65)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.65)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.41)
Abandoned America: AI imagines what famous US cities would look like after 100 years - if they were deserted by humans
What would American cities look like 100 years after human beings have left, with the streets devoid of human life and beginning to be reclaimed by nature? While the chatbot put our future world in text, the AI image generator Midjourney painted pictures of these abandoned metropolises, showing the concrete jungles transforming into actual jungles. Kieron Connolly, author of Abandoned Places and Abandoned Civilizations, says that visions of abandoned cities have a unique power. 'This isn't what city life is supposed to look like. Nature is allowed to reclaim the land,' Connolly said. ChatGPT writes, 'In the year 2123, the once-thriving metropolis of Chicago stands as a haunting testament to the passage of time and the resilience of nature.'
- North America > United States > Illinois > Cook County > Chicago (0.26)
- North America > United States > New York (0.07)
- North America > United States > California > Los Angeles County > Los Angeles (0.07)
ChatGPT writes convincing fake scientific abstracts that fool reviewers in study
Could the new and wildly popular chatbot ChatGPT convincingly produce fake abstracts that fool scientists into thinking those studies are real? That was the question worrying Northwestern Medicine physician-scientist Dr. Catherine Gao when she designed a study, collaborating with University of Chicago scientists, to test that theory. Yes, scientists can be fooled, their new study reports. Blinded human reviewers, when given a mix of real and falsely generated abstracts, could spot ChatGPT-generated abstracts only 68% of the time. The reviewers also incorrectly identified 14% of real abstracts as AI-generated.